COS 513: Foundations of Probabilistic Modeling, Lecture 6

Authors

  • Wonho Kim
  • Chris Park
Abstract

p(x) = ∏_C Ψ_C(x_C) / ∏_S Φ_S(x_S),  (1)

where Ψ_C(x_C) is the potential for a clique C and Φ_S(x_S) is the potential for a separator S. After the junction tree algorithm has run, each clique potential Ψ_C(x_C) equals the marginal probability of clique C. We thus obtain a representation that is a ratio of products of marginals, and yet is still a representation of the joint probability. After forming the maximal cliques, we first initialize the separator potentials to 1; the good property of this initialization is that (1) already equals the joint distribution, so consistency with the joint is maintained from the outset. The basic idea of the junction tree algorithm is to adjust the clique potentials Ψ_C(x_C) so that they become the local marginals, while adjusting the separator potentials Φ_S(x_S) in step so that the joint probability distribution p(x) in (1) is left unchanged.
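
As a concrete illustration of this update scheme, here is a minimal sketch of Hugin-style propagation on the smallest possible junction tree: two cliques {A,B} and {B,C} joined by the separator {B}, built from a toy chain A → B → C. The chain, the numbers, and the variable names are all invented for this example; the lecture itself does not prescribe an implementation.

```python
import numpy as np

# Toy chain A -> B -> C; all numbers are made up for illustration.
p_a = np.array([0.6, 0.4])                    # p(A)
p_b_a = np.array([[0.7, 0.3],                 # p(B|A), row = value of A
                  [0.2, 0.8]])
p_c_b = np.array([[0.9, 0.1],                 # p(C|B), row = value of B
                  [0.4, 0.6]])

# Initialization: the cliques absorb the factors and the separator is
# all ones, so psi_ab * psi_bc / phi_b reproduces the joint p(A,B,C).
psi_ab = p_a[:, None] * p_b_a                 # potential on clique {A,B}
psi_bc = p_c_b.copy()                         # potential on clique {B,C}
phi_b = np.ones(2)                            # potential on separator {B}

# Collect pass: {A,B} -> {B,C}.
new_phi = psi_ab.sum(axis=0)                  # marginalize out A
psi_bc *= (new_phi / phi_b)[:, None]          # rescale {B,C}; joint unchanged
phi_b = new_phi

# Distribute pass: {B,C} -> {A,B}.
new_phi = psi_bc.sum(axis=1)                  # marginalize out C
psi_ab *= (new_phi / phi_b)[None, :]          # rescale {A,B}; joint unchanged
phi_b = new_phi

# After one inward and one outward pass, every potential is a marginal.
joint = p_a[:, None, None] * p_b_a[:, :, None] * p_c_b[None, :, :]
assert np.allclose(psi_ab, joint.sum(axis=2))       # psi_ab == p(A,B)
assert np.allclose(psi_bc, joint.sum(axis=0))       # psi_bc == p(B,C)
assert np.allclose(phi_b, joint.sum(axis=(0, 2)))   # phi_b  == p(B)
```

Each message marginalizes a clique onto the separator and rescales the neighboring clique by the ratio of the new separator potential to the old one, so the quotient in (1) is invariant at every step: the clique potentials move toward the local marginals while the joint p(x) is preserved.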


Similar resources

Lecture 15: Learning probabilistic models

In the first half of the course, we introduced backpropagation, a technique we used to train neural nets to minimize a variety of cost functions. One of the cost functions we discussed was cross-entropy, which encourages the network to learn to predict a probability distribution over the targets. This was our first glimpse into probabilistic modeling. But probabilistic modeling is so important ...


Lecture 17: Learning probabilistic models

In the first half of the course, we introduced backpropagation, a technique we used to train neural nets to minimize a variety of cost functions. One of the cost functions we discussed was cross-entropy, which encourages the network to learn to predict a probability distribution over the targets. This was our first glimpse into probabilistic modeling. But probabilistic modeling is so important ...


Lecture 18: Learning probabilistic models

In the first half of the course, we introduced backpropagation, a technique we used to train neural nets to minimize a variety of cost functions. One of the cost functions we discussed was cross-entropy, which encourages the network to learn to predict a probability distribution over the targets. This was our first glimpse into probabilistic modeling. But probabilistic modeling is so important ...
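
As an aside to the three excerpts above: the cross-entropy cost they refer to can be written in a few lines. This is a minimal, self-contained sketch with toy numbers; the softmax model and all values are invented for illustration and are not taken from the lectures.

```python
import numpy as np

def softmax(z):
    """Map raw scores to a probability distribution."""
    z = z - z.max()              # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def cross_entropy(p_pred, target):
    """Negative log-probability assigned to the true class."""
    return -np.log(p_pred[target])

logits = np.array([2.0, 0.5, -1.0])   # a network's raw outputs for 3 classes
probs = softmax(logits)
print(probs)                           # predicted distribution over targets
print(cross_entropy(probs, 0))        # small loss: class 0 gets most mass
print(cross_entropy(probs, 2))        # large loss: class 2 gets little mass
```

Minimizing this loss pushes the predicted distribution toward the targets, which is the probabilistic reading the excerpts describe.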


Lecture 3b: Modeling of Rotational Mechanical Systems

The objective of this lecture is to review the basic building blocks of lumped parameter rotational mechanical systems and to build the foundations that will enable you to model more complex dynamic systems consisting of translation and rotational elements. Corresponding to the translational elements mass, spring and damper, are the rotational inertia, rotational spring and rotational damper, r...
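
For reference, the correspondence this excerpt describes amounts to swapping each translational element for its rotational counterpart in the standard second-order model; the symbols below (m, c, k, J, b, k_t) are the usual textbook ones, not notation taken from the lecture.

```latex
% Translational mass-spring-damper and its rotational analogue:
% mass m <-> inertia J, damper c <-> rotational damper b,
% spring k <-> torsional spring k_t, force F <-> torque tau.
\[
  m\,\ddot{x} + c\,\dot{x} + k\,x = F(t)
  \qquad\longleftrightarrow\qquad
  J\,\ddot{\theta} + b\,\dot{\theta} + k_t\,\theta = \tau(t)
\]
```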


Math 140a: Foundations of Real Analysis I

1. Ordered Sets, Ordered Fields, and Completeness
  1.1. Lecture 1: January 5, 2016
  1.2. Lecture 2: January 7, 2016
  1.3. Lecture 3: January 11, 2016
  1.4. Lecture 4: January 14, 2016
2. Sequences and Limits
  2.1. Lecture 5: January 19, 2016
  2.2. Lecture 6: January 21, 2016
  2.3. Lecture 7: January 26, 2016
  2.4. Lecture 8: January 28, 2016
3. Extensions of R: the Extended Rea...


Publication date: 2009